    Formulating the history matching problem with consistent error statistics

    It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters, multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model with the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes' theorem that accounts for this data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better covers the observations with larger and more realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the magnitude of the update. The paper also shows the consistency of the subspace inversion scheme by Evensen (Ocean Dyn. 54, 539–560, 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a “larger” ensemble of perturbations to represent the measurement error covariance matrix.
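    A minimal sketch of the distinction described above, using generic notation that is an assumption of this summary rather than the paper's own (x for the static model parameters, d for the observed rate data, e for the rate-data errors that also force the simulator, with x and e taken as a priori independent for illustration):

    ```latex
    \begin{align*}
      f(x \mid d)    &\propto f(d \mid x)\, f(x)
        && \text{(standard: perfect simulator, errors only in } x\text{)} \\
      f(x, e \mid d) &\propto f(d \mid x, e)\, f(e)\, f(x)
        && \text{(extended: the rate-data errors } e \text{ are also updated)}
    \end{align*}
    ```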

    Part-time positions in nursing homes: the manager's internal substitute pool?

    Background: As many as 70-90% of employees in the municipal health service work part-time. Several reorganizations in recent years have aimed to make the health service more efficient and to use limited health resources better, while expectations for the quality of health services keep rising. The extensive use of part-time positions affects the daily operation of nursing home wards, and the pressure on how managers run their wards is considerable. Many reports have been written on the use of part-time and full-time positions in the health service, but very few of them have focused on the health managers. Aim: This master's study seeks to find out how nursing home managers use part-time positions in their wards, and whether they prefer numerical or functional flexibility. The study also seeks to map the managers' decision basis when they choose whether to rely on part-time or full-time positions; this is assessed with the help of a small set of decision models. Material: Interview data from six ward managers at six different nursing homes in Bergen municipality. Method: Qualitative method with individual interviews. Results: The managers prefer a high degree of numerical flexibility in their wards. Having a large number of part-time employees at their disposal gives the managers an "internal substitute pool" from which they can call in staff on short notice in case of sickness absence. Numerical flexibility is also preferred for financial reasons, to run the wards as cheaply as possible. The managers adapt staffing to the patients' care needs and daily rhythm. They want to promote functional flexibility in their wards, but say it is difficult to prioritize because of weekend staffing requirements and constraints in the rota setup. The managers perceive a lack of alternatives to the current use of part-time positions. Their decision basis fits a rule-based decision model: they have taken over established rota setups "inherited" from their predecessors, and they follow laws and regulations when making decisions. There is little knowledge of alternative measures that could reduce the extensive use of part-time positions, so the managers find it difficult to make anything other than routine decisions. Conclusion: The managers in this study want wards with both numerical and functional flexibility. The six wards are run with a high degree of numerical flexibility, and the managers find that structural constraints make it difficult to prioritize functional flexibility. They have little knowledge of alternatives, and this lack of alternatives makes it difficult or impossible for them to make choices that could reduce the extensive use of part-time positions.

    Data Assimilation Fundamentals

    This open-access textbook's significant contribution is the unified derivation of data-assimilation techniques from a common fundamental and optimal starting point, namely Bayes' theorem. Unique to this book is the "top-down" derivation of the assimilation methods. It starts from Bayes' theorem and gradually introduces the assumptions and approximations needed to arrive at today's popular data-assimilation methods. This strategy is the opposite of most textbooks and reviews on data assimilation, which typically take a bottom-up approach to derive a particular assimilation method, for example, deriving the Kalman filter from control theory or the ensemble Kalman filter as a low-rank approximation of the standard Kalman filter. The bottom-up approach derives the assimilation methods from different mathematical principles, making it difficult to compare them. Thus, it is unclear which assumptions are made to derive an assimilation method and sometimes even which problem it aspires to solve. The book's top-down approach allows categorizing data-assimilation methods based on the approximations used. This approach enables the user to choose the most suitable method for a particular problem or application. Have you ever wondered about the difference between the ensemble 4DVar and the "ensemble randomized maximum likelihood" (EnRML) methods? Do you know the differences between the ensemble smoother and the ensemble Kalman smoother? Would you like to understand how a particle flow is related to a particle filter? In this book, we provide clear answers to several such questions. The book provides the basis for an advanced course in data assimilation. It focuses on the unified derivation of the methods and illustrates their properties on multiple examples. It is suitable for graduate students, post-docs, scientists, and practitioners working in data assimilation.
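    To illustrate the book's top-down starting point in a single formula (the notation below is an assumption of this summary, not taken from the text), every method begins from the same Bayesian posterior for the model trajectory, with the familiar schemes recovered under additional assumptions such as Gaussian priors and likelihoods or linearized observation operators:

    ```latex
    % x_{0:K}: model states over the assimilation window; d: all observations in that window.
    \begin{equation*}
      f(x_{0:K} \mid d) \;\propto\; f(d \mid x_{0:K})\, f(x_{0:K})
    \end{equation*}
    ```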

    A Stochastic Covariance Shrinkage Approach in Ensemble Transform Kalman Filtering

    Ensemble Kalman filters (EnKFs) employ a Monte Carlo approach to represent covariance information and are affected by sampling errors in operational settings, where the number of model realizations is much smaller than the model state dimension. To alleviate the effects of these errors, the EnKF relies on model-specific heuristics such as covariance localization, which takes advantage of the spatial locality of correlations among the model variables. This work proposes an approach to alleviate sampling errors that utilizes the locally time-averaged dynamics of the model, described in terms of a climatological covariance of the dynamical system. We use this covariance as the target matrix in covariance shrinkage methods and develop a stochastic covariance shrinkage approach in which synthetic ensemble members are drawn to enrich both the ensemble subspace and the ensemble transformation.
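    A minimal NumPy sketch of the linear-shrinkage idea summarized above: the ensemble sample covariance is blended with a climatological target, and synthetic anomalies are drawn from the target to enrich the ensemble subspace. The function names, the fixed shrinkage weight gamma, and the Gaussian draw are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def shrink_covariance(ensemble, B_clim, gamma=0.5):
        """Blend the ensemble sample covariance with a climatological target:
        B = gamma * B_clim + (1 - gamma) * B_sample (gamma chosen heuristically).

        ensemble : (n_state, n_members) array of model states
        B_clim   : (n_state, n_state) climatological (target) covariance
        """
        anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
        B_sample = anomalies @ anomalies.T / (ensemble.shape[1] - 1)
        return gamma * B_clim + (1.0 - gamma) * B_sample

    def synthetic_anomalies(B_clim, n_extra, seed=0):
        """Draw zero-mean synthetic members from the climatological covariance
        to enrich the ensemble subspace before the ensemble transform."""
        rng = np.random.default_rng(seed)
        return rng.multivariate_normal(np.zeros(B_clim.shape[0]), B_clim, size=n_extra).T
    ```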

    p-Kernel Stein Variational Gradient Descent for Data Assimilation and History Matching

    A Bayesian method of inference known as “Stein variational gradient descent” was recently implemented for data assimilation problems under the heading of the “mapping particle filter”. In this manuscript, the algorithm is applied to another type of geoscientific inverse problem, namely history matching of petroleum reservoirs. In order to combat the curse of dimensionality, the commonly used Gaussian kernel, which defines the solution space, is replaced by a p-kernel. In addition, the ensemble gradient approximation used in the mapping particle filter is rectified, and the data assimilation experiments are re-run with more relevant settings and comparisons. Our experimental results in data assimilation are rather disappointing. However, the results from the subsurface inverse problem show more promise, especially as regards the use of p-kernels.
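    Below is a minimal NumPy sketch of one Stein variational gradient descent update using an assumed p-kernel of the form k(x, y) = exp(-sum_k |x_k - y_k|^p / h); the kernel form, bandwidth h, exponent p, and step size eps are illustrative choices, not the authors' exact specification.

    ```python
    import numpy as np

    def svgd_step(particles, grad_log_p, p=1.5, h=1.0, eps=0.1):
        """One Stein variational gradient descent update.

        particles  : (n, d) array of particle positions
        grad_log_p : function returning the (n, d) array of gradients of the
                     log target density evaluated at each particle
        """
        n = particles.shape[0]
        diff = particles[:, None, :] - particles[None, :, :]      # diff[j, i] = x_j - x_i
        K = np.exp(-np.sum(np.abs(diff) ** p, axis=-1) / h)       # K[j, i] = k(x_j, x_i)
        # Gradient of the kernel with respect to x_j (the summed-over particle).
        grad_K = -(p / h) * np.abs(diff) ** (p - 1) * np.sign(diff) * K[:, :, None]
        phi = (K.T @ grad_log_p(particles) + grad_K.sum(axis=0)) / n
        return particles + eps * phi                              # transported particles
    ```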

    Learning from weather and climate science to prepare for a future pandemic

    Established pandemic models have yielded mixed results in tracking and forecasting the SARS-CoV-2 pandemic. To prepare for future outbreaks, the disease-modeling community can improve its modeling capabilities by learning from the methods and insights of another arena where accurate modeling is paramount: the weather and climate research field. We argue that these improvements fall into four categories: model development, international comparisons, data exchange, and risk communication. A proper quantification of uncertainties in observations and models, including model assumptions, tail risks, and appropriate communication using probabilistic, Bayesian-based approaches, did not receive enough attention during the pandemic. Standardized testing and international comparison of model results are routine in climate modeling; no equivalent currently exists for pandemic models. Sharing of data is urgently needed. The homogenized real-time international data exchange organized by the World Meteorological Organization (WMO) since the 1960s can serve as a role model for a global (privacy-preserving) data exchange by the World Health Organization. Lastly, researchers can look to climate change and high-impact weather forecasting to glean lessons about risk communication and the role of science in decision-making, in order to avoid common pitfalls and guide communication. Each of the four improvements is detailed here.

    Organizing the Indicator Zoo: Can a New Taxonomy Make It Easier for Citizen Science Data to Contribute to the United Nations Sustainable Development Goal Indicators?

    In order to measure progress towards the aims outlined in the United Nations (UN) 2030 Agenda, data are needed for the different indicators linked to each UN Sustainable Development Goal (SDG). Where statistical or scientific data are not sufficient or available, alternative data sources, such as data from citizen science (CS) activities, could be used. Statistics Norway, together with the Norwegian Association of Local and Regional Authorities, has developed a taxonomy for classifying indicators intended to measure the SDGs. The purpose of this taxonomy is to sort, evaluate, and compare different SDG indicators and to assess their usefulness by identifying their central properties and characteristics. This is done by organizing central characteristics under the three dimensions of Goal, Perspective, and Quality. The taxonomy is designed to help users find the right indicators across sectors to measure progress towards the SDGs, depending on their own context and strategic priorities. The Norwegian taxonomy also offers new opportunities for the re-use of data collected through CS activities. This paper presents the taxonomy, demonstrates how it can be applied to an indicator based on a CS data set, and suggests further uses of CS data.
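    As an illustration only (the field names and example values below are hypothetical and do not reproduce the official taxonomy's code lists), an indicator record tagged along the three dimensions of Goal, Perspective, and Quality could look like this:

    ```python
    from dataclasses import dataclass

    @dataclass
    class IndicatorRecord:
        """Hypothetical record for classifying an SDG indicator."""
        name: str
        goal: str         # which SDG or target the indicator measures
        perspective: str  # e.g. the local vs. global viewpoint of the indicator
        quality: str      # e.g. an assessment of the underlying data source

    # Hypothetical example of tagging a citizen-science-based indicator:
    beach_litter = IndicatorRecord(
        name="Beach litter density from citizen cleanup registrations",
        goal="SDG 14: Life below water",
        perspective="local",
        quality="crowdsourced, partially validated",
    )
    ```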